Ordinal decision problems are very common in real life. As a result, ordinal classification models have drawn much attention in recent years. Many ordinal problem domains assume that the output is monotonically related to the input, and some ordinal data mining models ensure this property while classifying. However, no one has yet reported how accurate these models are in the presence of varying levels of non-monotone noise. To investigate this, researchers need an easy-to-use tool for generating artificial ordinal datasets that contain both an arbitrary monotone pattern and user-specified levels of non-monotone noise. An algorithm that generates such datasets is presented here in detail for the first time. Two versions of the algorithm are discussed. The first is more time-consuming: it generates purely monotone datasets as the basis of the computation, and non-monotone noise is then incrementally inserted into the dataset. The second version is similar in principle but significantly faster: it begins by generating almost-monotone datasets before introducing the noise. Theoretical and empirical studies of the two versions are provided, showing that the second, faster algorithm is sufficient for almost all practical applications. Additional observations about the two algorithms and suggestions for further research are also discussed.
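To make the overall idea concrete, the following is a minimal sketch, not the paper's algorithm: it generates a purely monotone ordinal dataset from a simple monotone pattern (class determined by thresholds on the feature sum) and then injects non-monotone noise by relabeling a user-specified fraction of points. The function names, the threshold-based pattern, and the uniform relabeling scheme are all assumptions made for illustration; the actual algorithm controls the noise insertion far more carefully.

```python
import random

def monotone_label(x, thresholds):
    # Illustrative monotone pattern (an assumption, not the paper's):
    # class = number of thresholds that the feature sum meets or exceeds.
    # Increasing any feature can never decrease the label, so the
    # labeling is monotone in the input.
    s = sum(x)
    return sum(s >= t for t in thresholds)

def generate_dataset(n_points, n_features, thresholds, noise_rate, seed=0):
    rng = random.Random(seed)
    # Step 1: build a purely monotone dataset.
    data = []
    for _ in range(n_points):
        x = [rng.random() for _ in range(n_features)]
        data.append((x, monotone_label(x, thresholds)))
    # Step 2: inject non-monotone noise by relabeling a fraction of
    # points to a different class (a crude stand-in for the paper's
    # incremental noise insertion).
    n_classes = len(thresholds) + 1
    n_noisy = int(noise_rate * n_points)
    for i in rng.sample(range(n_points), n_noisy):
        x, y = data[i]
        data[i] = (x, rng.choice([c for c in range(n_classes) if c != y]))
    return data
```

Note that simply relabeling points changes the fraction of noisy labels, but does not directly control the resulting level of non-monotonicity between pairs of points, which is precisely the harder problem the algorithms described above address.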